ShiftDelete.Net Global

Whisper Leak exposes AI chat topics from encrypted traffic


Microsoft has detailed a troubling new exploit, dubbed Whisper Leak, that allows attackers to infer sensitive AI chat topics even through encrypted traffic. The side-channel attack targets streaming language models, raising fresh concerns over the privacy of conversations with AI.

Despite encryption via HTTPS, Whisper Leak takes advantage of how large language models stream their output. These streams produce unique patterns in packet size and timing. According to Microsoft researchers, attackers who observe encrypted TLS traffic can train classifiers to identify the topic of a user’s prompt based solely on these patterns.
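
The idea can be illustrated with a deliberately simplified sketch: reduce each stream's packet-size sequence to a few features and match against per-topic averages. The traffic traces, topics, and nearest-centroid classifier below are all invented for illustration; Microsoft's actual proof-of-concept used far richer features and trained LightGBM, Bi-LSTM, and BERT models.

```python
# Illustrative sketch only: inferring a chat topic from stream metadata.
# Packet sizes and topics are synthetic, not real captured traffic.
from statistics import mean

def features(packet_sizes):
    """Reduce one stream's packet-size sequence to simple summary features."""
    return (len(packet_sizes), mean(packet_sizes), max(packet_sizes))

# Hypothetical training traces: each topic tends to produce a
# characteristic pattern of streamed-chunk sizes.
TRAINING = {
    "weather": [[40, 42, 41, 43], [39, 44, 40]],
    "finance": [[120, 118, 125, 119, 122], [121, 117, 124]],
}

# Average the feature vectors of each topic's traces into one centroid.
CENTROIDS = {
    topic: [mean(col) for col in zip(*(features(t) for t in traces))]
    for topic, traces in TRAINING.items()
}

def classify(packet_sizes):
    """Nearest-centroid guess at the topic behind an observed stream."""
    f = features(packet_sizes)
    return min(
        CENTROIDS,
        key=lambda t: sum((a - b) ** 2 for a, b in zip(f, CENTROIDS[t])),
    )

print(classify([122, 119, 121, 123, 120]))  # → "finance"
```

The crucial point is that nothing here decrypts anything: the classifier sees only sizes and counts, which TLS does not hide.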

That means a passive observer, such as someone sharing your Wi-Fi, a compromised ISP node, or even a government agency, could infer whether you are asking questions about politically sensitive or illegal topics.

To test how effective the attack is, Microsoft built a proof-of-concept using three machine learning models: LightGBM, Bi-LSTM, and BERT. The results were alarming. Models from OpenAI, Mistral, DeepSeek, and xAI all returned topic classification accuracy above 98%.

Over time, this allows attackers to build stronger models that can zero in on specific conversations with high confidence.

Because Whisper Leak gets more effective as more samples are collected, Microsoft warns the attack could become a practical threat for persistent adversaries. When combined with broader surveillance and cross-session data, a cyberattacker could build a more complete profile of a user’s inquiries even across multiple chats.

Thankfully, several companies have already responded. OpenAI, Microsoft, and Mistral have deployed a mitigation that adds a random-length text sequence to each response. This noise obscures token lengths and disrupts timing-based classifiers.
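
A minimal sketch of that mitigation, assuming a generic streamed-response format (the field name, padding range, and structure below are assumptions, not any provider's actual wire format):

```python
# Sketch of the padding mitigation described above: attach a
# random-length filler field to each streamed chunk so the size
# on the wire no longer tracks the underlying token length.
# Field names and the padding range are illustrative assumptions.
import secrets
import string

def pad_response(chunk: str, max_pad: int = 64) -> dict:
    """Wrap a response chunk with random-length filler text."""
    pad_len = secrets.randbelow(max_pad + 1)
    filler = "".join(
        secrets.choice(string.ascii_letters) for _ in range(pad_len)
    )
    return {"content": chunk, "obfuscation": filler}

# Two identical chunks now usually differ in serialized size,
# so packet length leaks far less about the tokens inside.
a = pad_response("Hello")
b = pad_response("Hello")
print(len(str(a)), len(str(b)))
```

Because the padding length is drawn fresh per chunk, a classifier trained on packet sizes sees mostly noise rather than the token-length signal it relies on.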

For users, Microsoft suggests simple precautions: avoid discussing highly sensitive topics over untrusted networks, and consider routing traffic through a VPN as an added layer of protection.

The Whisper Leak disclosure arrives alongside new findings on the vulnerability of open-weight LLMs to multi-turn attacks. A separate study from Cisco shows that models like Llama 3.3 and Qwen 3 are easier to exploit over time, while safety-focused systems like Google Gemma 3 hold up better.

These results underline a growing concern: as AI tools become more capable, they also become more fragile. Without security guardrails, developers face real operational risks, from data leakage to jailbreak prompts and adversarial instructions.

Whisper Leak may be just one attack vector, but it’s another loud signal: LLM security isn’t optional anymore.
